
    Classification with Extreme Learning Machine and Ensemble Algorithms Over Randomly Partitioned Data

    In this age of Big Data, machine learning based data mining methods are extensively used to inspect large-scale data sets. Deriving applicable predictive models from such data sets is challenging because of their high complexity. At the same time, high data availability has made automated classification of data sets a critical and complicated task. In this paper, the power of MapReduce-based Distributed AdaBoosting of Extreme Learning Machines (ELM) is explored to build a reliable predictive bag of classification models. To this end, (i) data set ensembles are built; (ii) the ELM algorithm is used to train weak classification models; and (iii) a strong classification model is built from the set of weak models. The training scheme is applied to publicly available knowledge discovery and data mining data sets.
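
    As a concrete illustration of steps (i)-(iii), the sketch below trains one weak ELM per random partition of the data and combines the weak models by majority vote. It is a single-machine simplification: the paper's actual pipeline distributes AdaBoost over MapReduce, and the names, sizes, and voting rule here are illustrative assumptions.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden layer, least-squares output."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # random, untrained hidden-layer weights (the defining trait of ELM)
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        Y = np.eye(int(y.max()) + 1)[y]      # one-hot targets; labels 0..K-1 assumed
        self.beta = np.linalg.pinv(H) @ Y    # closed-form least-squares output weights
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

def ensemble_predict(models, X):
    # majority vote over weak ELMs, each trained on one random partition
    votes = np.stack([m.predict(X) for m in models])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

    Given training arrays `X_train`, `y_train`, the random partitions could be built with, e.g., `np.array_split(np.random.permutation(len(X_train)), 10)`, fitting one ELM per split.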

    Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples

    Deep neural network (DNN) architectures are considered robust to random perturbations. Nevertheless, it has been shown that they can be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies have been conducted in this new area, called "Adversarial Machine Learning", to devise new adversarial attacks and to defend against these attacks with more robust DNN architectures. However, most current research has concentrated on using the model's loss function to craft adversarial examples or to create robust models. This study explores using quantified epistemic uncertainty, obtained from Monte-Carlo Dropout sampling, for adversarial attacks, perturbing the input toward shifted-domain regions on which the model has not been trained. We propose new attack ideas that exploit the target model's difficulty in discriminating between samples drawn from the original and shifted versions of the training data distribution, as measured by the model's epistemic uncertainty. Our results show that the proposed hybrid attack approach increases attack success rates from 82.59% to 85.14%, from 82.96% to 90.13%, and from 89.44% to 91.06% on the MNIST Digit, MNIST Fashion, and CIFAR-10 datasets, respectively.
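
    A minimal sketch of the Monte-Carlo Dropout quantification the abstract builds on, assuming a PyTorch classifier that contains dropout layers (`model` and `n_samples` are placeholders): keeping dropout active at inference time and averaging several stochastic forward passes yields a predictive distribution whose entropy serves as the epistemic-uncertainty score guiding the attack.

```python
import torch

def mc_dropout_uncertainty(model, x, n_samples=50):
    # keep dropout layers active at inference so each forward pass is stochastic
    model.train()
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        ).mean(dim=0)
    # predictive entropy: one scalar uncertainty score per input
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1)
    return probs, entropy
```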

    Uncertainty as a Swiss army knife: new adversarial attack and defense ideas based on epistemic uncertainty

    Although state-of-the-art deep neural network models are known to be robust to random perturbations, these architectures have been verified to be quite vulnerable to deliberately crafted, quasi-imperceptible perturbations. These vulnerabilities make it challenging to deploy deep neural network models in areas where security is a critical concern. In recent years, many studies have been conducted to develop new attack methods and new defense techniques that enable more robust and reliable models. In this study, we use quantified epistemic uncertainty, obtained from the model's final probability outputs along with the model's own loss function, to generate more effective adversarial samples. We also propose a novel defense approach against attacks such as DeepFool, which produce adversarial samples located near the model's decision boundary. We verified the effectiveness of our attack method on the MNIST (Digit), MNIST (Fashion), and CIFAR-10 datasets. In our experiments, the proposed uncertainty-based reversal method achieved a worst-case success rate of around 95% without compromising clean accuracy.
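
    The reversal defense can be pictured as a gradient step on the input that lowers the model's uncertainty, pulling a near-boundary sample back toward a confident region. The sketch below is an illustrative reconstruction under that reading, using single-pass softmax entropy rather than the paper's full uncertainty estimate; `steps` and `lr` are made-up parameters.

```python
import torch
import torch.nn.functional as F

def uncertainty_reversal(model, x, steps=10, lr=0.01):
    """Nudge the input to reduce predictive entropy, moving a suspected
    adversarial sample away from the decision boundary."""
    model.eval()
    x_rev = x.clone().detach()
    for _ in range(steps):
        x_rev.requires_grad_(True)
        probs = F.softmax(model(x_rev), dim=-1)
        entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1).mean()
        grad, = torch.autograd.grad(entropy, x_rev)
        # descend the entropy surface, staying in the valid pixel range
        x_rev = (x_rev - lr * grad.sign()).clamp(0, 1).detach()
    return x_rev
```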

    Sensor based cyber attack detections in critical infrastructures using deep learning algorithms

    The technology that has evolved with innovations in the digital world has also brought an increase in many security problems. Day by day, the methods and forms of cyberattacks become more complicated, and therefore their detection becomes more difficult. In this work, we use data sets prepared in collaboration with Raymond Borges and Oak Ridge National Laboratory. These data sets include measurements of industrial control systems related to attack behavior, comprising synchronized measurements and data records from Snort and from relays with a simulated control panel. We developed two models using these data sets. The first, which we call the DNN model, was built using recent deep learning algorithms. The second model was created by adding an autoencoder structure to the DNN model. All of the variables used when developing the models were set parametrically: the activation function, the number of hidden layers, the number of nodes per layer, and the number of iterations were analyzed to find the optimum model design. Run with these optimum settings, our model obtained better results than those found in related studies, reaching a 100% accuracy rate, and its learning speed is also entirely satisfactory. While training on the data set of about 4 thousand different operations takes about 90 seconds, the trained model detects new attacks within milliseconds, which increases its applicability in real-world environments.
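
    The "everything parametric" design is straightforward to express. Below is a hedged PyTorch sketch of such a builder, in which the hidden-layer widths and activation are arguments; the layer sizes shown are placeholders, not the tuned values from the study.

```python
import torch.nn as nn

def build_dnn(input_dim, n_classes, hidden=(128, 64), act=nn.ReLU):
    # hidden-layer count, widths, and activation are all parameters,
    # so a grid of configurations can be searched for the optimum design
    layers, prev = [], input_dim
    for width in hidden:
        layers += [nn.Linear(prev, width), act()]
        prev = width
    layers.append(nn.Linear(prev, n_classes))
    return nn.Sequential(*layers)

# The study's second model prepends an autoencoder stage; schematically:
#   model = nn.Sequential(trained_autoencoder, build_dnn(code_dim, n_classes))
```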

    Detection of attack-targeted scans from the Apache HTTP Server access logs

    A web application can be visited for different purposes: as a normal (natural) visit by a regular user; by crawlers, bots, spiders, etc. for indexing purposes; or as an exploratory scan by a malicious user prior to an attack. An attack-targeted web scan can be viewed as a phase of a potential attack, so detecting it allows more attacks to be caught than traditional detection methods do. In this work, we propose a method to detect attack-oriented scans and to distinguish them from other types of visits. We use access log files of Apache (or IIS) web servers and try to identify attack situations by examining past data. In addition to web scan detection, we add a rule set to detect SQL injection and XSS attacks. Our approach was applied to sample data sets, and the results were analyzed in terms of performance measures to compare our method with other commonly used detection techniques. Furthermore, various tests were run on log samples from real systems. Lastly, several suggestions for further development are discussed.
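
    The rule-set idea can be illustrated with a few regular expressions over decoded request URIs. The patterns and log format below are simplified assumptions, not the paper's actual rules, and would need tuning against real Apache access logs.

```python
import re
from urllib.parse import unquote

# Illustrative signatures for common SQL injection and XSS payload fragments
SQLI = re.compile(r"union\s+select|or\s+1=1|sleep\(|information_schema", re.I)
XSS = re.compile(r"<script|javascript:|onerror\s*=|alert\(", re.I)

# Matches the request line inside a combined-format Apache access log entry
REQUEST = re.compile(r'"(?:GET|POST|HEAD) (?P<uri>\S+) HTTP/[\d.]+"')

def scan_access_log(path):
    """Yield (line_no, kind, uri) for requests matching an attack rule."""
    with open(path, encoding="utf-8", errors="replace") as f:
        for no, line in enumerate(f, 1):
            m = REQUEST.search(line)
            if not m:
                continue
            uri = unquote(m.group("uri"))  # decode %xx-encoded payloads
            if SQLI.search(uri):
                yield no, "sqli", uri
            elif XSS.search(uri):
                yield no, "xss", uri
```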

    Detection of Distributed Denial-of-Service Attacks Based on Ensemble Methods (original title: Topluluk Yöntemlerine Dayalı Dağıtık Hizmet Dışı Bırakma Saldırılarının Algılanması)

    Although distributed denial-of-service (DDoS) is an old cyber attack method, it is still used by attackers today. Attackers carry out this type of attack at various layers by exploiting existing vulnerabilities of Internet protocols. With advancing technology, machine learning methods can now be applied to high-dimensional data sets. The data sets used for detecting cyber attacks are log files containing very large numbers of rows. In this study, we aim to derive a prediction model by analyzing logs obtained from distributed denial-of-service attacks. Using ensemble methods, cyber security data sets are made trainable. Model performance was measured with different parameters, with the goal of building the model with the highest accuracy. The classification performance of the resulting model is reported in tables and figures.
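
    As a sketch of the ensemble workflow described above, the scikit-learn snippet below trains a random forest on a labeled DDoS log table and reports the classification measures; the file name and column names are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# hypothetical flow-level log export with a binary "label" column
df = pd.read_csv("ddos_logs.csv")
X, y = df.drop(columns=["label"]), df["label"]
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```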

    A generative model based adversarial security of deep learning and linear classifier models

    In recent years, machine learning algorithms have been applied widely in various fields such as healthcare, transportation, and autonomous vehicles. With the rapid development of deep learning techniques, it is critical to take security concerns into account when deploying these algorithms; although machine learning offers significant advantages, the issue of security is often ignored even though the models have many real-world applications. In this paper, we propose a mitigation method against adversarial attacks on machine learning models, based on an autoencoder, a type of generative model. The main idea behind adversarial attacks is to produce erroneous results by manipulating trained models. We also present the performance of the autoencoder defense against various attack methods, spanning deep neural networks and traditional algorithms: non-targeted and targeted attacks on multi-class logistic regression, and the fast gradient sign method, the targeted fast gradient sign method, and the basic iterative method against neural networks, all on the MNIST dataset.
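
    The mitigation can be sketched as reconstruct-then-classify: pass each (possibly adversarial) input through a trained autoencoder before the classifier, so that small crafted perturbations are largely removed by the reconstruction. The architecture below is a minimal illustration for MNIST-sized inputs, not the paper's exact model.

```python
import torch
import torch.nn as nn

class AE(nn.Module):
    """Minimal autoencoder for 28x28 inputs (layer sizes are illustrative)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(),
                                 nn.Linear(784, 128), nn.ReLU(),
                                 nn.Linear(128, 32))
        self.dec = nn.Sequential(nn.Linear(32, 128), nn.ReLU(),
                                 nn.Linear(128, 784), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x)).view(-1, 1, 28, 28)

def defended_predict(ae, clf, x):
    # reconstruct first, then classify: the autoencoder projects the input
    # back toward the clean-data manifold, stripping small perturbations
    with torch.no_grad():
        return clf(ae(x)).argmax(dim=-1)
```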